Our company sells different kinds of alcohol across regions in Russia. Following the success of a wine promotion in Saint Petersburg, this report recommends further promotions to maximize profit. However, a limited budget currently allows us to target only a few regions. By generating more short-term wins, our company can eventually extend the business throughout the country.
Understanding the current situation requires analyzing both the micro and macro environment. For example, Porter's five forces model is a common tool for understanding a business's micro-environment (Chaffey, 2019, p. 70), while PESTLE can be used to analyze the external factors that make up the macro environment (Brown, 2019, p. 29). This report uses clustering methods to select the regions whose wine-buying behavior resembles Saint Petersburg's. This is part of the macro-environment analysis: by understanding these external factors, the business improves its chance of success, since regions with a similar buying trend are likely to respond to the same wine promotion that worked in Saint Petersburg.
Since the budget allows us to pick only 10 regions for the next marketing campaign, this report not only clusters the regions by historical wine sales but also takes into account the recent sales of all alcohol categories. In this way, it can recommend a priority order for the 10 selected regions, which benefits both operations management and project management under a limited budget.
This report uses Python as the analysis tool. The dataset contains the sales figures for five kinds of alcohol (wine, beer, vodka, champagne, and brandy) across different regions from 1998 to 2016. It has 1,615 rows and 7 columns, as shown below.

This report will transform the dataset into two different tables. They are for two different dimensions of analysis.
The analysis steps will be as follows.
To identify a suitable number of clusters, the Elbow method can help determine the proper k for K-means clustering based on the sum of squared errors, SSE (Flood, 2018).
#Import modules for EDA
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import missingno as msno
import seaborn as sns
import plotly.express as px
import plotly.graph_objects as go
#Import the .csv file and take a quick view of the dataframe
data = pd.read_csv('https://raw.githubusercontent.com/datacamp/careerhub-data/master/Alcohol%20Consumption%20in%20Russia/alcohol-consumption-in-russia.csv')
data_duplicate = data.copy()
data.head()
#Review the types of columns. It shows there are 1615 rows and 7 columns.
data.info()
# Take a look at the dataset to see whether there is a missing value. (Here is sorted by years)
print(data.isna().sum())
msno.matrix(data)
plt.show()
#Take another look in order to find a pattern of missing values (Sort by regions)
msno.matrix(data.sort_values(['region', 'year']))
plt.show()
There seems to be a pattern: when the dataset is sorted by region, the missing values cluster together. The next step is therefore to identify which regions have missing values. Is each such region's data lost entirely across all kinds of alcohol, or only partially? If it is only partially missing, can it still be used for further analysis?
#List the regions that have missing values.
data_sorted = data.sort_values(['region', 'year'])
data_na = data_sorted[data_sorted['brandy'].isna()]
print(data_na.groupby('region')['region'].unique())
data_na
#Subset the data to the last three years' records, and then review whether any missing values remain.
data_recent = data_duplicate.copy()
data_recent = data_recent[(data_recent['year'] >= 2014) & (data_recent['year'] <= 2016)]
data_recent[data_recent['brandy'].isna()]
#Remove Chechen Republic and Republic of Ingushetia because they still have too many missing values.
data_recent = data_recent[(data_recent['region'] != 'Chechen Republic') & (data_recent['region'] != 'Republic of Ingushetia')]
print(data_recent.info())
msno.matrix(data_recent)
plt.show()
After removing the Chechen Republic and Republic of Ingushetia records, the table has no missing values. The first table, containing all alcohol sales during the last three years, is now complete.
The next step is to prepare the other table, which contains the wine sales records for all years. Since some regions are missing only part of their wine sales records, this step fills the missing values with each region's average sales.
Furthermore, because this table will be used for clustering by the yearly sales trend, dividing each region's sales by its maximum sales figure forces the sales of different regions into the same range. The clustering result is then driven by relative magnitude rather than absolute sales numbers.
#Get a list of all regions
region_list = data['region'].unique().tolist()
#Fill the missing values of wine by the average numbers
for x in region_list:
    mean_num = data.loc[data['region'] == x, 'wine'].mean()
    data.loc[data['region'] == x, 'wine'] = data.loc[data['region'] == x, 'wine'].fillna(mean_num)
#Duplicate the table with the real sales numbers for further use.
data_sales = data.copy()
#Divide each region's sales by its maximum wine sales so that the maximum value is 1.
for y in region_list:
    max_num = data.loc[data['region'] == y, 'wine'].max()
    data.loc[data['region'] == y, 'wine'] = (data.loc[data['region'] == y, 'wine'] / max_num).round(2)
print('The maximum value in the wine column:', data['wine'].max())
data
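As a side note, the per-region scaling loop above can also be expressed without an explicit loop via `groupby().transform`. A minimal sketch with toy data (only the `region` and `wine` column names are taken from the report's dataset; the values are made up):

```python
import pandas as pd

# Toy frame standing in for the report's data: two regions, two years each.
df = pd.DataFrame({
    'region': ['A', 'A', 'B', 'B'],
    'wine':   [2.0, 4.0, 5.0, 10.0],
})
# Divide each region's wine sales by that region's maximum,
# so every region's series peaks at exactly 1.0.
df['wine_scaled'] = (df['wine'] / df.groupby('region')['wine'].transform('max')).round(2)
print(df['wine_scaled'].tolist())  # [0.5, 1.0, 0.5, 1.0]
```

This avoids mutating the frame region by region and keeps the operation in a single vectorized step.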
#Review whether there is still a missing value.
print(data.info())
msno.matrix(data)
plt.show()
#Find where the missing values are.
data[data['wine'].isna()]
#Remove Chechen Republic, which has no wine sales records at all, and then drop the other alcohol columns.
data.dropna(subset = ["wine"], inplace=True)
data = data.drop(['beer', 'vodka', 'champagne', 'brandy'], axis=1)
#Make sure whether there is still a missing value.
print(data.isna().sum())
msno.matrix(data)
plt.show()
Now the second table, containing the historical wine sales, is complete. There are two tables ready for the clustering analysis, as follows.
The given dataset has missing values, most of them from four regions: Chechen Republic, Republic of Crimea, Republic of Ingushetia, and Sevastopol. The preparation process falls into two parts because this report clusters the regions along two dimensions: the current sales mix of all alcohol categories and the wine sales trend.
Two regions have no sales records during the last three years. Thus, the first table is prepared by removing the Chechen Republic and Republic of Ingushetia records. "data_recent", containing the sales of the different categories during the last three years, is then ready.
As for the other table, the later analysis clusters the regions by the wine sales trend. Since some regions have partially missing wine records, the missing values were filled with each region's average sales. In addition, to make the clustering depend on relative magnitude rather than absolute sales numbers, each region's sales were divided by its maximum wine sales. "data" was then cleaned by removing the Chechen Republic, which has no wine sales records at all.
However, the "data" table still needs one transformation to make it suitable for clustering. "data_pivot" below is the final version used for the analysis.
#Use a pivot table to make years in columns and regions in rows.
data_pivot = data.pivot(index='region', columns='year', values='wine')
data_pivot
#Visualize the pivot table to see the sales trend in relative magnitude.
plt.subplots(figsize=(16, 22))
ax = sns.heatmap(data_pivot, annot=True, fmt='.2f', vmin=0.0, vmax=1.0, cmap='YlOrBr')
plt.title('Wine Sales Rate')
plt.show()
To cluster the regions by their wine sales time series, this report uses K-means clustering. The K-means algorithm allocates every data point to the nearest cluster, minimizing the distance between the points and the centroids (Garbade, 2018). K-means clustering therefore lets this report split the regions into groups with similar wine-buying behavior based on the sales trend.
Before starting K-means clustering, however, the ideal number of clusters must be determined. Too few clusters would put entities with very different traits into the same group; too many would leave little difference between groups. Therefore, this section applies the Elbow method to determine the best number of clusters.
#Import packages for clustering purposes
from scipy.cluster.hierarchy import linkage, dendrogram
from scipy.cluster.hierarchy import fcluster
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans
from sklearn.preprocessing import normalize
#Visualize the distribution of the sales in different years
sns.pairplot(data_pivot, diag_kind='kde')
plt.show()
The pairplot of "data_pivot" suggests the distributions of the variables are not strongly skewed. Since K-means clustering works better on variables with similar means and variances, the variables in this table look acceptable for further analysis. The normalization below shows the standardized table is also well suited to K-means clustering, so this report continues the analysis with the normalized table.
#Normalize the table and review the mean and standard deviation of the table
scaler=StandardScaler()
scaler.fit(data_pivot)
pivot_normalized = scaler.transform(data_pivot)
pivot_normalized = pd.DataFrame(data=pivot_normalized, index=data_pivot.index, columns=data_pivot.columns)
pivot_normalized.agg(['mean', 'std']).round()
#Visualize the SSE to choose the point with the sharpest angle.
sse={}
for k in range(1, 21):
    kmeans = KMeans(n_clusters=k, random_state=1)
    kmeans.fit(pivot_normalized)
    sse[k] = kmeans.inertia_
sns.pointplot(x=list(sse.keys()), y=list(sse.values()))
plt.title('The Elbow Method')
plt.xlabel('k')
plt.ylabel('SSE')
plt.show()
According to the outcome of the Elbow method, 4 clusters is the ideal choice for this situation. This analysis therefore splits the regions into 4 clusters and then visualizes the buying behavior of each cluster. The heatmap shows that 17 regions share Saint Petersburg's wine-buying behavior, with customers buying more wine between 2003 and 2011.
#Split regions into 4 clusters by K-means clustering.
kmeans = KMeans(n_clusters=4, random_state=44)
kmeans.fit(pivot_normalized)
cluster_labels = kmeans.labels_
data_cluster = data_pivot.assign(cluster=cluster_labels)
data_cluster = data_cluster.reset_index()
data_cluster.columns.name = None
data_cluster.head()
#Find out the cluster number of Saint Petersburg
data_cluster[data_cluster['region'] == 'Saint Petersburg']
#Get the region names where the buying behavior is similar to Saint Petersburg
data_cluster_list = data_cluster.loc[data_cluster['cluster'] == 1, 'region'].to_numpy()
print(len(data_cluster_list))
data_cluster_list
#Visualize the table to understand the buying behavior of different clusters
kmeans_avg = data_cluster.groupby(['cluster']).mean().round(2)
plt.subplots(figsize=(16, 6))
sns.heatmap(kmeans_avg.T, cmap = 'YlGnBu')
plt.show()
According to the heatmap above, the regions in cluster 0 show a purchasing peak between 2007 and 2016. Cluster 1 buys more wine from 2003 to 2011. Cluster 2 shows steady buying behavior with no obviously large sales at any point in its history. In contrast, cluster 3 shows consistently high wine sales throughout.
To select 10 promising regions from the 17 in cluster 1, this section picks the most valuable regions using regression analysis and the actual sales numbers. First, the regression coefficient (the slope of the fitted line): a positive coefficient means the sales number increases every year, and the higher the coefficient, the higher the potential profit. The coefficient is therefore one selection criterion.
The other criterion is the regions' actual sales. The previous clustering used relative magnitude to group regions by their yearly wine sales trend, so the regions in cluster 1 may not have sales as high as Saint Petersburg's. The next step therefore calculates the average sales over 2012 to 2016 and ranks the regions by these figures together with the regression coefficients.
#Visualize the regression lines of the regions in cluster 1 in order to take a quick look
plt.subplots(figsize=(16, 8))
for region in data_cluster_list:
    ax = sns.regplot(data=data_sales.loc[data_sales['region'] == region],
                     x='year', y='wine', order=1, ci=False, label=region)
ax.set_title('Wine Sales', y=1)
ax.set(xlabel='Year', ylabel='Sales in Liter per Capita')
plt.locator_params(axis="x", nbins=10)
plt.xticks(rotation=0)
plt.legend(bbox_to_anchor=(1.01, 1), loc='upper left', borderaxespad=0)
plt.show()
#Make a pivot table for actual sales of wine.
data_sales = data_sales.drop(['beer', 'vodka', 'champagne', 'brandy'], axis=1)
data_sales.dropna(subset = ["wine"], inplace=True)
data_sales_pivot = data_sales.pivot(index='region', columns='year', values='wine')
data_sales_pivot.head()
#Acquire the regression coefficients (slopes) of the regions.
data_sales_pivot = data_sales_pivot.reset_index()
data_sales_pivot.columns.name = None
region_list = data_sales_pivot['region'].to_numpy()
lin_reg = LinearRegression()
reg_list = []
for z in region_list:
    reg_df = data_sales[data_sales['region'] == z]
    X = reg_df[['year']]
    y = reg_df[['wine']]
    lin_reg.fit(X, y)
    reg_list.append(lin_reg.coef_)
coef_list = np.array(reg_list).flatten()
data_sales_pivot = data_sales_pivot.assign(coef=coef_list)
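As an aside, the per-region slope fitted above with LinearRegression can be obtained equivalently with np.polyfit. A toy sketch with made-up numbers (the real analysis uses each region's full year/sales series):

```python
import numpy as np

# Hypothetical per-capita wine sales rising by 1.0 liter per year.
years = np.array([2014, 2015, 2016])
sales = np.array([5.0, 6.0, 7.0])
slope, intercept = np.polyfit(years, sales, 1)  # degree-1 fit: slope and intercept
print(round(slope, 2))  # 1.0
```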
#Calculate the average sales during 2012 to 2016.
data_sales_pivot['avg_wine_12-16'] = data_sales_pivot.iloc[:, 15:20].mean(axis=1).round(2)
data_sales_pivot.head()
#Keep only the regression coefficients and sales.
data_cm = data_sales_pivot.copy(deep=True)
data_cm = data_cm[['region', 'coef', 'avg_wine_12-16']]
data_cm
#Bin the regression coefficients and sales into 3 levels.
data_cm['coef_level']=pd.qcut(data_cm['coef'].rank(method='first'), q=3, labels=range(1,4))
data_cm['avg_wine_level']=pd.qcut(data_cm['avg_wine_12-16'].rank(method='first'), q=3, labels=range(1,4))
data_cm.head()
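A brief note on the binning above: rank(method='first') breaks ties so that pd.qcut can always form three equal-sized bins, even when values repeat. A toy sketch with made-up numbers:

```python
import pandas as pd

# Six values with ties; ranking first makes the ranks 1..6,
# which qcut then splits into three equal-sized labeled bins.
s = pd.Series([0.1, 0.1, 0.2, 0.3, 0.3, 0.5])
levels = pd.qcut(s.rank(method='first'), q=3, labels=range(1, 4))
print(levels.tolist())  # [1, 1, 2, 2, 3, 3]
```

Without the ranking step, qcut on heavily tied data can fail with duplicate bin edges.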
#Give a final score by summing up the coefficient and sales levels
data_cm['score'] = data_cm[['coef_level', 'avg_wine_level']].sum(axis=1).astype(int)
def cm_level_func(df):
    if df['score'] >= 5:
        return 'Top'
    elif (df['score'] >= 3) & (df['score'] < 5):
        return 'Middle'
    else:
        return 'Low'
data_cm['cm_level'] = data_cm.apply(cm_level_func, axis=1)
data_cm.head()
#Add cluster labels into this data frame.
data_cluster_label = data_cluster[['region', 'cluster']]
data_label_scatter = data_cm.merge(data_cluster_label, how='inner', on = 'region')
#Review the number of regions at different levels.
print(data_label_scatter.groupby('cm_level').agg({'coef': 'mean', 'avg_wine_12-16': ['mean', 'count']}).round(2))
data_label_scatter.head()
#Scatter plot shows the distribution of the regions in different clusters
data_label_scatter['cluster'] = data_label_scatter['cluster'].astype(str)
fig = px.scatter(data_label_scatter, x='coef', y='avg_wine_12-16',
                 labels={
                     'coef': 'Coefficient',
                     'avg_wine_12-16': 'Average Wine Sales in Liter'
                 },
                 color='cluster', hover_data=['region'],
                 color_discrete_map={'0': 'blue', '1': 'orange', '2': 'red', '3': 'black'})
fig.show()
#Sort the top 10 regions by the score number
data_filtered = data_label_scatter[data_label_scatter['region'].isin(data_cluster_list)]
data_top10 = data_filtered.sort_values('score', ascending=False).head(10)
data_top10
By sorting on the score, the table above clearly shows the top 10 regions where the next wine promotion should proceed. The scatter plot below demonstrates the distribution of these 10 regions.
#Visualize the distribution of the top 10 regions
fig=go.Figure()
fig.add_trace(
go.Scatter(x=data_cm['coef'], y=data_cm['avg_wine_12-16'], name='All Regions', mode='markers', opacity=0.4)
)
fig.add_trace(
go.Scatter(x=data_top10['coef'], y=data_top10['avg_wine_12-16'], name='Top 10 Regions', mode='markers')
)
annotation_01={'x':'0.412316', 'y':'11.38', 'showarrow':True, 'arrowhead':4, 'xshift':-2, 'yshift':4, 'text':'Republic of Karelia', 'textangle':-90, 'font':{'size':10, 'color':'green'}}
annotation_02={'x':'0.378456', 'y':'7.90', 'showarrow':True, 'arrowhead':4, 'xshift':-2, 'yshift':4, 'text':'Kirov Oblast', 'textangle':-90, 'font':{'size':10, 'color':'green'}}
annotation_03={'x':'0.272912', 'y':'8.16', 'showarrow':False, 'arrowhead':4, 'xshift':-2, 'yshift':48, 'text':'Leningrad Oblast', 'textangle':-90, 'font':{'size':10, 'color':'green'}}
annotation_04={'x':'0.211544', 'y':'7.36', 'showarrow':True, 'arrowhead':4, 'xshift':-2, 'yshift':4, 'text':'Sverdlovsk Oblast', 'textangle':-90, 'font':{'size':10, 'color':'green'}}
annotation_05={'x':'0.307579', 'y':'5.46', 'showarrow':True, 'arrowhead':4, 'xshift':-2, 'yshift':4, 'text':'Chukotka Autonomous Okrug', 'textangle':-90, 'font':{'size':10, 'color':'green'}}
annotation_06={'x':'0.264070', 'y':'7.92', 'showarrow':False, 'arrowhead':4, 'xshift':-4, 'yshift':52, 'text':'Kaliningrad Oblast', 'textangle':-90, 'font':{'size':10, 'color':'green'}}
annotation_07={'x':'0.177895', 'y':'9.80', 'showarrow':True, 'arrowhead':4, 'xshift':-2, 'yshift':4, 'text':'Pskov Oblast', 'textangle':-90, 'font':{'size':10, 'color':'green'}}
annotation_08={'x':'0.231930', 'y':'7.64', 'showarrow':True, 'arrowhead':4, 'xshift':-2, 'yshift':4, 'text':'Saint Petersburg', 'textangle':-90, 'font':{'size':10, 'color':'green'}}
annotation_09={'x':'0.257754', 'y':'5.44', 'showarrow':True, 'arrowhead':4, 'xshift':-2, 'yshift':4, 'text':'Kursk Oblast', 'textangle':-90, 'font':{'size':10, 'color':'green'}}
annotation_10={'x':'0.188912', 'y':'6.32', 'showarrow':True, 'arrowhead':4, 'xshift':-2, 'yshift':4, 'text':'Penza Oblast', 'textangle':-90, 'font':{'size':10, 'color':'green'}}
fig.update_layout({
'annotations':[annotation_01, annotation_02, annotation_03,
annotation_04, annotation_05, annotation_06,
annotation_07, annotation_08, annotation_09, annotation_10],
'showlegend':True, 'legend':{'x':0.02, 'y':0.96, 'bgcolor':'rgb(246, 228, 129)'}
})
fig.update_xaxes(
title_text = "Coefficient",
title_font = {"size": 16},
title_standoff = 12)
fig.update_yaxes(
title_text = "Average Wine Sales in Liter",
title_font = {"size": 16},
title_standoff = 12)
fig.show()
K-means clustering helps select regions by the wine sales trend, and the regression coefficients together with the sales numbers provide the scores for ranking. On this basis, this report picks the top 10 regions for the next wine promotion.
However, it is worth checking whether the result changes when all kinds of alcohol are taken into account. The next section therefore uses another clustering method to find the regions similar to Saint Petersburg in terms of sales across the alcohol categories.
"data_recent", prepared during the Process phase, contains the sales records of the last three years. This section groups that table by region to get the average recent sales of each alcohol category. The dendrogram of the hierarchical clustering suggests splitting the regions into 4 clusters; cutting at fewer clusters would merge groups at a large linkage distance, lumping together regions that differ considerably.
The cluster containing Saint Petersburg has 34 regions. By comparing the hierarchical and K-means clustering results, this report arrives at 6 high-priority regions out of the top 10. This approach also helps operations and project management set the priority of tasks.
#Group the table by regions and get the average sales during the last three years.
data_recent_group = data_recent.groupby('region').agg({'wine':'mean', 'beer':'mean', 'vodka':'mean', 'champagne':'mean', 'brandy':'mean'}).round(2)
data_recent_group
#Prepare the data for hierarchical clustering
data_features_recent = data_recent_group.iloc[:, 0:5].to_numpy()
data_index_recent = data_recent_group.index.to_numpy()
#Plot a dendrogram
plt.subplots(figsize=(22, 6))
hierarchy_recent = linkage(data_features_recent, method='complete')
dendrogram(hierarchy_recent, labels=data_index_recent, leaf_rotation=90, leaf_font_size=14)
plt.show()
#Split regions into 4 clusters.
recent_labels = fcluster(hierarchy_recent, 4, criterion='maxclust')
data_recent_hierarchy = data_recent_group.assign(hierarchy_cluster=recent_labels)
data_recent_hierarchy
#Search which cluster Saint Petersburg is in.
data_recent_hierarchy = data_recent_hierarchy.reset_index()
data_recent_hierarchy.columns.name = None
data_recent_hierarchy[data_recent_hierarchy['region'] == 'Saint Petersburg']
#Understand how many regions are in cluster 3.
print(data_recent_hierarchy[data_recent_hierarchy['hierarchy_cluster'] == 3].shape)
#Find the common regions as a priority in two different clustering results.
data_recent_list = data_recent_hierarchy.loc[data_recent_hierarchy['hierarchy_cluster'] == 3, 'region'].to_numpy()
top_10_list = data_top10['region'].to_numpy()
common_list=[]
not_in_list=[]
for z in top_10_list:
    if z in data_recent_list:
        common_list.append(z)
    else:
        not_in_list.append(z)
print('Top 10 Regions in both lists:', common_list)
print('Top 10 Regions not in the hierarchical cluster:', not_in_list)
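The same comparison can be written more compactly with set operations. A sketch with hypothetical region names standing in for top_10_list and data_recent_list:

```python
# Hypothetical lists standing in for top_10_list and data_recent_list.
top_10 = ['Saint Petersburg', 'Kirov Oblast', 'Penza Oblast']
recent_cluster = ['Saint Petersburg', 'Kirov Oblast', 'Pskov Oblast']
common = sorted(set(top_10) & set(recent_cluster))   # in both lists
missing = sorted(set(top_10) - set(recent_cluster))  # only in the top 10
print('In both lists:', common)
print('Not in the hierarchical cluster:', missing)
```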
This report aims to select 10 promising regions for the next wine promotions to maximize profit. The chosen methods are K-means clustering and hierarchical clustering. For K-means, the Elbow method indicated that 4 clusters is the proper number; clustering the historical wine sales this way selects 17 regions with the same wine-buying behavior as Saint Petersburg.
To determine which 10 of these regions offer the highest profitability, this report applies regression analysis: the higher the regression coefficient, the higher the potential profit. Combined with the actual wine sales, this yields the top 10 regions as below.
As for the hierarchical clustering, this report uses the table of recent sales across all alcohol categories. After splitting the regions into 4 clusters, 34 regions share Saint Petersburg's sales mix. Since clustering by historical sales should reflect buying behavior more accurately than clustering by recent sales alone, the top 10 regions from the K-means analysis serve as the primary list.
By comparing the top 10 regions with the list picked by the hierarchical clustering, this report recommends 6 regions as the primary targets for the wine promotions.
The remaining 4 regions of the top 10 can be set as a lower priority.
Since the budget is limited, suggesting a priority list is valuable: it helps the operations team plan for risk and helps the project manager prioritize tasks.
According to the heatmap of the K-means clusters, the 4 clusters show clearly different buying trends, so K-means splits the regions properly. Nevertheless, trying other clustering methods and comparing the results could reveal an even better solution.
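One such follow-up, sketched here on synthetic data, is to validate the cluster count with the silhouette score as a second opinion alongside the Elbow method (make_blobs merely stands in for the report's pivot table):

```python
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_score

# Four well-separated synthetic clusters as a stand-in dataset.
X, _ = make_blobs(n_samples=200,
                  centers=[[0, 0], [10, 0], [0, 10], [10, 10]],
                  cluster_std=1.0, random_state=1)
scores = {}
for k in range(2, 7):
    labels = KMeans(n_clusters=k, random_state=1, n_init=10).fit_predict(X)
    scores[k] = silhouette_score(X, labels)
# The k with the highest average silhouette is the preferred cluster count.
best_k = max(scores, key=scores.get)
print('Best k by silhouette:', best_k)
```

Agreement between the Elbow method and the silhouette score would strengthen the case for 4 clusters; disagreement would be a prompt to inspect the clusters more closely.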
Brown, M. (2019). Innovation in Marketing - CIM Official Module Guide. Berkshire: The Chartered Institute of Marketing.
Chaffey, D. (2019). Digital Marketing: Strategy, Implementation and Practice (7th ed.). Harlow: Pearson Education.
Flood, B. (2018). K-means Clustering: Algorithm, Applications, Evaluation Methods, and Drawbacks. Towards Data Science. Retrieved from https://towardsdatascience.com/
Garbade, M. J. (2018). Understanding K-means Clustering in Machine Learning. Towards Data Science. Retrieved from https://towardsdatascience.com/